@fm3 fm3 commented Sep 30, 2025

URL of deployed dev instance (used for testing):

Steps to test:

  • Proofread some
  • Download annotation, reupload
  • Should see the same proofread state; the history should be inaccessible.
  • Test with multiple proofreading layers in one annotation ← unsupported case
  • Try uploading multiple annotations; assertions should block the upload if one has proofreading.

TODOs:

  • Handle history version numbers correctly
  • Do we need to find the initial largestAgglomerateId? How does this work in create/duplicate?
  • Add assertions when uploading multiple annotations (or fully build it?)
  • Unzip the zarr arrays
  • Create update actions from them
  • Apply update actions?
  • Make sure the result is a consistent state.
  • tracingstore now needs chunkContentsCache
  • propagate correct boolean fill_value
  • double-check update actions have correct ordering
  • test with multiple proofreading layers, are versions dense?
  • test with multiple layers only one of which has proofreading, do versions look right?
  • clean up

Issues:


  • Added changelog entry (create a $PR_NUMBER.md file in unreleased_changes or use ./tools/create-changelog-entry.py)
  • Removed dev-only changes like prints and application.conf edits
  • Considered common edge cases
  • Needs datastore update after deployment

@fm3 fm3 self-assigned this Sep 30, 2025

coderabbitai bot commented Sep 30, 2025

📝 Walkthrough

Adds server-side import of editable-mapping (proofreading) annotations from uploaded ZIPs, propagates editable-mapping metadata through upload/NML parsing, introduces tracing-store endpoints and IO to initialize mappings from ZIPs, refactors the chunk cache into a trait with DS/TS implementations, adds boolean datastore/Zarr support and config/i18n entries, and includes small UI/log fixes.

Changes

  • Annotation upload & NML (app/controllers/AnnotationIOController.scala, app/models/annotation/AnnotationUploadService.scala, app/models/annotation/nml/NmlParser.scala, app/models/annotation/nml/NmlVolumeTag.scala, unreleased_changes/8969.md): Upload flow extended to carry editable-mapping metadata; UploadedVolumeLayer/NmlVolumeTag gain optional edited-mapping fields; mergeAndSaveVolumeLayers now returns (List[AnnotationLayer], Long) to propagate earliest-accessible-version; added validation and per-layer editable-mapping save flow.
  • WK tracing-store client (app/models/annotation/WKRemoteTracingStoreClient.scala): Added saveEditableMappingIfPresent(...) to conditionally POST an editable-mapping zip to the tracing store or fail when information is incomplete (method appears duplicated in file).
  • Tracing-store API & IO pipeline (webknossos-tracingstore/.../controllers/EditableMappingController.scala, .../tracings/editablemapping/EditableMappingIOService.scala, webknossos-tracingstore/conf/tracingstore.latest.routes): New POST route and controller action saveFromZip; EditableMappingIOService.initializeFromUploadedZip added to unzip Zarr edits, build versioned update actions and persist them; DI updated to inject tracing datastore, chunk cache, and editable-mapping service.
  • Temp file / directory support (tracingstore) (webknossos-tracingstore/.../files/TempFileService.scala): create supports directories, added createDirectory, and cleanup handles directory deletion vs file deletion.
  • Chunk-cache trait refactor & DI wiring (webknossos-datastore/.../DataStoreModule.scala, .../services/ChunkCacheService.scala, .../services/BinaryDataServiceHolder.scala, webknossos-datastore/.../services/*/*.scala, webknossos-tracingstore/.../TSChunkCacheService.scala, webknossos-tracingstore/.../TracingStoreModule.scala): ChunkCacheService converted to a trait with maxSizeBytes; DSChunkCacheService and TSChunkCacheService added; many constructors/imports updated to use the DS/TS implementations and DI bindings adjusted.
  • Boolean data / Zarr & MultiArray support (datastore) (webknossos-datastore/.../datareaders/ArrayDataType.scala, .../ChunkTyper.scala, .../DatasetHeader.scala, .../MultiArrayUtils.scala, .../zarr3/Zarr3ArrayHeader.scala, .../zarr3/Zarr3DataType.scala): Added ArrayDataType.bool and BoolChunkTyper; DatasetHeader.fillValueBoolean; MultiArrayUtils.createFilledArray signature extended to accept numeric and boolean fill parameters; Zarr3 fill parsing broadened to handle boolean and mapping for bool fixed.
  • RPC file helper (webknossos-datastore/.../rpc/RPCRequest.scala): Added postFileWithJsonResponse[T: Reads](file: File) to POST files and parse JSON responses.
  • Tracingstore config & module bindings (webknossos-tracingstore/.../TracingStoreConfig.scala, webknossos-tracingstore/conf/standalone-tracingstore.conf, webknossos-tracingstore/.../TracingStoreModule.scala): Added tracingstore cache config key tracingstore.cache.chunkCacheMaxSizeBytes and registered TSChunkCacheService in module bindings.
  • App config & i18n (conf/application.conf, conf/messages): Added tracingstore.cache.chunkCacheMaxSizeBytes config entry and i18n key annotation.upload.editableMappingIncompleteInformation.
  • Minor fixes (app/Startup.scala, frontend/javascripts/viewer/view/version_entry.tsx): Replaced typographic ellipses with ASCII in log messages; fixed segment description fallback to use segmentId2.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues

Possibly related PRs

Suggested reviewers

  • MichaelBuessemeyer

Poem

I hop through zips and Zarr I peek,
I stitch up edges, version by tweak.
Caches hum low, booleans blink bright,
I save the mapping and nibble a byte.
Rabbit cheers — upload works tonight! 🐇

Pre-merge checks and finishing touches

✅ Passed checks (5 passed)
  • Title Check ✅ Passed: The title succinctly captures the primary change of enabling reuploading exported editable mapping annotation ZIPs and provides clear context on the new feature without extraneous details.
  • Linked Issues Check ✅ Passed: The changes introduce parsing of the editable mapping ZIP, unzipping and reading Zarr arrays, constructing and applying update actions with correct version handling, enforcing assertions for multi-annotation uploads, and adding the save endpoint and service implementation, thereby fulfilling all coding objectives of issue #8826.
  • Out of Scope Changes Check ✅ Passed: All modifications, including added boolean support in data types, new chunk cache service bindings, and infrastructure updates, directly support the import and processing of editable mapping ZIPs and no unrelated changes are present.
  • Description Check ✅ Passed: The pull request description outlines the deployment URL, testing steps, implementation todos, and the linked issue, all of which are directly relevant to the editable mapping reupload feature and accurately reflect the changes in the codebase.
  • Docstring Coverage ✅ Passed: No functions found in the changes. Docstring coverage check skipped.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 34d7248 and 175666c.

📒 Files selected for processing (2)
  • conf/messages (1 hunks)
  • webknossos-tracingstore/conf/tracingstore.latest.routes (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • conf/messages
  • webknossos-tracingstore/conf/tracingstore.latest.routes
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build-smoketest-push
  • GitHub Check: frontend-tests
  • GitHub Check: backend-tests


@fm3 fm3 changed the title from "WIP Reupload exported editable mapping annotation zip" to "Reupload exported editable mapping annotation zip" on Oct 6, 2025
@fm3 fm3 marked this pull request as ready for review October 6, 2025 13:57

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3ArrayHeader.scala (1)

31-31: Clarify comment about boolean fill_value support.

The comment states "Reading boolean datasets is not supported" but the parsing logic (lines 178-189) now reads boolean fill_value from JSON and stores it as a string. Consider updating the comment to reflect this capability more clearly.

Apply this diff to clarify the comment:

-    fill_value: Either[String, Number], // Reading boolean datasets is not supported. When writing boolean, true and false literals will be stored in the string.
+    fill_value: Either[String, Number], // Boolean fill_value is read from JSON and stored as string ("true" or "false"). When writing boolean data type, fill_value is serialized as JSON boolean.
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/MultiArrayUtils.scala (1)

53-61: Handle boolean datasets in createEmpty

ArrayDataType.bool now exists and the rest of the stack can construct boolean chunks, but createEmpty still has no branch for it. Calling createEmpty(ArrayDataType.bool, …) will hit the default path and throw a MatchError, breaking workflows that expect an empty boolean MultiArray.

   def createEmpty(dataType: ArrayDataType, rank: Int): MultiArray = {
     val datyTypeMA = dataType match {
       case ArrayDataType.i1 | ArrayDataType.u1 => MADataType.BYTE
       case ArrayDataType.i2 | ArrayDataType.u2 => MADataType.SHORT
       case ArrayDataType.i4 | ArrayDataType.u4 => MADataType.INT
       case ArrayDataType.i8 | ArrayDataType.u8 => MADataType.LONG
       case ArrayDataType.f4                    => MADataType.FLOAT
       case ArrayDataType.f8                    => MADataType.DOUBLE
+      case ArrayDataType.bool                  => MADataType.BOOLEAN
     }
     MultiArray.factory(datyTypeMA, Array.fill(rank)(0))
   }
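
As a quick sanity check for the suggested branch (assuming the MultiArray/MADataType aliases refer to the netCDF-Java ucar.ma2 types, which is how the snippet above reads), ucar.ma2 does provide a BOOLEAN data type, so the factory call works the same way as in the numeric cases:

import ucar.ma2.{Array => MultiArray, DataType => MADataType}

object BoolEmptyArraySketch extends App {
  // An empty boolean array of rank 3, built exactly like the numeric branches in createEmpty.
  val empty: MultiArray = MultiArray.factory(MADataType.BOOLEAN, Array.fill(3)(0))
  println(empty.getShape.mkString("x")) // prints 0x0x0
}
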
🧹 Nitpick comments (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3ArrayHeader.scala (1)

178-189: Simplify flatMap to map for cleaner code.

The flatMap(value => JsSuccess(...)) pattern is unnecessarily verbose. Since you're already in a for-comprehension that expects JsResult, you can use map to transform the value directly.

Apply this diff to simplify the fill_value parsing:

-        fill_value <- (fill_value_raw.validate[String],
-                       fill_value_raw.validate[Number],
-                       fill_value_raw.validate[Boolean]) match {
-          case (asStr: JsSuccess[String], _, _) =>
-            asStr.flatMap(value => JsSuccess[Either[String, Number]](Left(value)))
-          case (_, asNum: JsSuccess[Number], _) =>
-            asNum.flatMap(value => JsSuccess[Either[String, Number]](Right(value)))
-          case (_, _, asBool: JsSuccess[Boolean]) =>
-            asBool.flatMap(value => JsSuccess[Either[String, Number]](Left(value.toString)))
-          case _ => JsError("Could not parse fill_value as string, number or boolean value.")
-        }
+        fill_value <- (fill_value_raw.validate[String],
+                       fill_value_raw.validate[Number],
+                       fill_value_raw.validate[Boolean]) match {
+          case (asStr: JsSuccess[String], _, _) => asStr.map(Left(_))
+          case (_, asNum: JsSuccess[Number], _) => asNum.map(Right(_))
+          case (_, _, asBool: JsSuccess[Boolean]) => asBool.map(v => Left(v.toString))
+          case _ => JsError("Could not parse fill_value as string, number or boolean value.")
+        }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a67f14d and 8fcf0bb.

📒 Files selected for processing (32)
  • app/Startup.scala (2 hunks)
  • app/controllers/AnnotationIOController.scala (3 hunks)
  • app/models/annotation/AnnotationUploadService.scala (1 hunks)
  • app/models/annotation/WKRemoteTracingStoreClient.scala (1 hunks)
  • app/models/annotation/nml/NmlParser.scala (2 hunks)
  • app/models/annotation/nml/NmlVolumeTag.scala (1 hunks)
  • conf/application.conf (1 hunks)
  • conf/messages (1 hunks)
  • frontend/javascripts/viewer/view/version_entry.tsx (2 hunks)
  • unreleased_changes/8969.md (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ArrayDataType.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkTyper.scala (4 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/DatasetHeader.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/MultiArrayUtils.scala (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3ArrayHeader.scala (3 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3DataType.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/BinaryDataServiceHolder.scala (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/connectome/ZarrConnectomeFileService.scala (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mapping/ZarrAgglomerateService.scala (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/ZarrMeshFileService.scala (2 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/segmentindex/ZarrSegmentIndexFileService.scala (2 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreConfig.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreModule.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (4 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TempFileService.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2 hunks)
  • webknossos-tracingstore/conf/standalone-tracingstore.conf (1 hunks)
  • webknossos-tracingstore/conf/tracingstore.latest.routes (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (17)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3DataType.scala (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ArrayDataType.scala (1)
  • ArrayDataType (5-82)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/DatasetHeader.scala (2)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/dataformats/wkw/WKWHeader.scala (1)
  • fill_value (95-95)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/precomputed/PrecomputedHeader.scala (1)
  • fill_value (54-54)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreConfig.scala (2)
app/utils/WkConf.scala (1)
  • Cache (74-80)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreConfig.scala (2)
  • Cache (32-46)
  • Redis (51-54)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/segmentindex/ZarrSegmentIndexFileService.scala (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (1)
  • DSChunkCacheService (28-30)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/ZarrMeshFileService.scala (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (1)
  • DSChunkCacheService (28-30)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/connectome/ZarrConnectomeFileService.scala (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (1)
  • DSChunkCacheService (28-30)
app/models/annotation/WKRemoteTracingStoreClient.scala (3)
util/src/main/scala/com/scalableminds/util/tools/Fox.scala (7)
  • s (236-240)
  • s (240-250)
  • s (250-259)
  • Fox (30-230)
  • Fox (232-305)
  • successful (53-56)
  • failure (58-62)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (2)
  • addQueryString (28-31)
  • postFileWithJsonResponse (135-138)
app/controllers/UserTokenController.scala (1)
  • RpcTokenHolder (31-39)
app/models/annotation/nml/NmlParser.scala (1)
frontend/javascripts/viewer/model/accessors/volumetracing_accessor.ts (1)
  • hasEditableMapping (625-634)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ChunkTyper.scala (2)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ArrayDataType.scala (1)
  • ArrayDataType (5-82)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/MultiArrayUtils.scala (2)
  • MultiArrayUtils (11-161)
  • createFilledArray (32-51)
app/controllers/AnnotationIOController.scala (3)
util/src/main/scala/com/scalableminds/util/tools/Fox.scala (8)
  • map (281-284)
  • map (377-377)
  • Fox (30-230)
  • Fox (232-305)
  • successful (53-56)
  • failure (58-62)
  • serialCombined (95-99)
  • serialCombined (99-111)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/models/annotation/AnnotationLayer.scala (5)
  • toProto (19-21)
  • AnnotationLayer (13-21)
  • AnnotationLayer (23-51)
  • AnnotationLayerStatistics (53-70)
  • unknown (69-69)
app/models/annotation/WKRemoteTracingStoreClient.scala (2)
  • saveEditableMappingIfPresent (252-269)
  • saveVolumeTracing (227-250)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreConfig.scala (1)
  • ImageArrayChunks (36-38)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TempFileService.scala (1)
util/src/main/scala/com/scalableminds/util/time/Instant.scala (1)
  • now (48-48)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/MultiArrayUtils.scala (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/ArrayDataType.scala (1)
  • ArrayDataType (5-82)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (3)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/AccessTokenService.scala (4)
  • webknossos (73-75)
  • validateAccessFromTokenContext (90-97)
  • UserAccessRequest (30-30)
  • UserAccessRequest (31-75)
util/src/main/scala/com/scalableminds/util/time/Instant.scala (2)
  • now (48-48)
  • logSince (72-75)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)
  • initializeFromUploadedZip (131-159)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mapping/ZarrAgglomerateService.scala (1)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (1)
  • DSChunkCacheService (28-30)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreConfig.scala (2)
  • Tracingstore (14-32)
  • Cache (28-30)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (9)
util/src/main/scala/com/scalableminds/util/io/ZipIO.scala (2)
  • ZipIO (15-309)
  • zip (70-87)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datavault/VaultPath.scala (1)
  • VaultPath (19-100)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/helpers/UPath.scala (2)
  • UPath (54-96)
  • fromLocalPath (80-80)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TsTempFileService.scala (1)
  • TsTempFileService (8-11)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/TracingDataStore.scala (1)
  • TracingDataStore (13-66)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingService.scala (1)
  • create (134-141)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TempFileService.scala (1)
  • createDirectory (42-49)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3Array.scala (3)
  • Zarr3Array (16-40)
  • Zarr3Array (42-184)
  • open (18-40)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/DatasetArray.scala (1)
  • readAsMultiArray (189-192)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-smoketest-push
  • GitHub Check: backend-tests
🔇 Additional comments (21)
app/Startup.scala (1)

101-101: LGTM! Log message standardization.

The Unicode ellipsis (…) has been replaced with ASCII three-dot ellipsis (...) in both log messages. This standardization improves consistency and avoids potential encoding issues.

Also applies to: 118-118

frontend/javascripts/viewer/view/version_entry.tsx (1)

170-170: LGTM! Correctness fix for segment identifier display.

The change correctly aligns segment2Description with segmentId2 instead of incorrectly referencing segmentId1. This ensures that version history entries accurately reflect which segments were involved in split and merge agglomerate operations.

Also applies to: 183-183

webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (1)

135-138: LGTM! Method follows existing patterns.

The implementation correctly mirrors the existing postFile method (lines 130-133) while adding JSON response parsing, consistent with other *WithJsonResponse methods in the class. The type parameter and return type are appropriate.

Note: Like the existing postFile method, this doesn't explicitly set a Content-Type header. Play's WSClient may set this automatically based on the file, but if the server expects a specific content type (e.g., application/zip for the editable mapping uploads mentioned in the PR), you may want to verify the header is correctly propagated.
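
For illustration only, a stand-in sketch of the pattern using the JDK HttpClient and play-json rather than the project's RPCRequest wrapper; the explicit Content-Type header addresses the note above, and the url/file parameters are placeholders:

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}
import java.nio.file.Path
import play.api.libs.json.{JsResult, Json, Reads}

object FilePostSketch {
  // POST a zip file and validate the JSON response body against a Reads[T].
  def postFileWithJsonResponse[T: Reads](url: String, file: Path): JsResult[T] = {
    val request = HttpRequest
      .newBuilder(URI.create(url))
      .header("Content-Type", "application/zip") // set explicitly in case the server checks it
      .POST(HttpRequest.BodyPublishers.ofFile(file))
      .build()
    val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
    Json.parse(response.body).validate[T]
  }
}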

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/BinaryDataServiceHolder.scala (1)

21-21: LGTM: DI type update aligns with trait refactoring.

The parameter type update from ChunkCacheService to DSChunkCacheService is consistent with the trait-based cache service refactoring across the datastore module.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mapping/ZarrAgglomerateService.scala (1)

16-16: LGTM: Consistent DI migration to DSChunkCacheService.

The import and constructor parameter updates align with the trait-based refactoring of the chunk cache service.

Also applies to: 28-28

webknossos-datastore/app/com/scalableminds/webknossos/datastore/DataStoreModule.scala (1)

64-64: LGTM: DI binding correctly updated.

The binding change from ChunkCacheService to DSChunkCacheService ensures the concrete implementation is wired as an eager singleton, consistent with the trait-based refactoring.

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/mesh/ZarrMeshFileService.scala (1)

13-13: LGTM: DI type update consistent with refactoring.

The import and constructor changes align with the DSChunkCacheService migration across the datastore module.

Also applies to: 62-62

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/connectome/ZarrConnectomeFileService.scala (1)

11-11: LGTM: Consistent constructor dependency update.

The migration to DSChunkCacheService follows the established pattern across the datastore services.

Also applies to: 45-45

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/segmentindex/ZarrSegmentIndexFileService.scala (1)

13-13: LGTM: DI update aligns with trait-based refactoring.

The constructor dependency change to DSChunkCacheService is consistent with the module-wide cache service migration.

Also applies to: 53-53

webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/ChunkCacheService.scala (2)

9-26: LGTM: Clean trait extraction improves flexibility.

The refactoring from a concrete class to a trait with protected val maxSizeBytes enables different implementations while preserving the cache initialization logic. This design improves testability and follows the dependency inversion principle.


28-30: LGTM: DSChunkCacheService provides datastore-specific sizing.

The concrete implementation correctly sources maxSizeBytes from the datastore configuration, maintaining the original behavior while enabling the trait-based design.
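
To make the shape of the refactor concrete, a minimal sketch follows; Caffeine stands in for the project's own cache type, and the subclasses take the configured limit directly instead of the real DataStoreConfig/TracingStoreConfig wiring:

import com.github.benmanes.caffeine.cache.{Cache, Caffeine, Weigher}

trait ChunkCacheService {
  protected def maxSizeBytes: Long

  // Shared cache of decoded chunk contents, bounded by total byte weight.
  lazy val sharedChunkContentsCache: Cache[String, Array[Byte]] = {
    val byLength: Weigher[String, Array[Byte]] = (_: String, chunk: Array[Byte]) => chunk.length
    Caffeine.newBuilder().maximumWeight(maxSizeBytes).weigher[String, Array[Byte]](byLength).build[String, Array[Byte]]()
  }
}

// In the real code these limits come from the datastore and tracingstore configs via DI.
class DSChunkCacheService(datastoreConfiguredLimit: Long) extends ChunkCacheService {
  protected val maxSizeBytes: Long = datastoreConfiguredLimit
}

class TSChunkCacheService(tracingstoreConfiguredLimit: Long) extends ChunkCacheService {
  protected val maxSizeBytes: Long = tracingstoreConfiguredLimit
}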

webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3ArrayHeader.scala (1)

46-50: No changes required for byteOrder detection
Option.contains("big") correctly matches the only big-endian value ("big"), and no other variants exist in the codebase.

conf/application.conf (1)

203-203: LGTM!

The new chunk cache configuration is properly documented and the 2 GB limit aligns with the datastore's image array cache configuration. This will be consumed by the new TSChunkCacheService.

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreModule.scala (1)

28-28: LGTM!

The DI binding for TSChunkCacheService follows the established pattern for service registration and is correctly configured as an eager singleton.

conf/messages (1)

270-270: LGTM!

The error message is clear, grammatically correct, and provides actionable information about what's missing (file or baseMappingName).

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreConfig.scala (1)

28-31: LGTM!

The new Cache configuration object follows the same pattern as existing configuration objects (Redis, Fossildb, WebKnossos) and correctly defines the chunk cache size parameter. The children list is properly updated to include the new Cache object.

webknossos-tracingstore/conf/standalone-tracingstore.conf (1)

64-64: LGTM!

The standalone tracingstore configuration correctly mirrors the chunk cache setting from application.conf, ensuring consistent behavior across deployment modes.

webknossos-tracingstore/conf/tracingstore.latest.routes (1)

43-43: LGTM!

The new route for saving editable mappings from ZIP follows RESTful conventions and is properly positioned within the Editable Mappings section. The parameter types are appropriate for the operation.

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TempFileService.scala (2)

42-49: LGTM!

The new createDirectory method follows the same pattern as the existing create method, with appropriate use of Files.createDirectory and proper tracking in activeTempFiles.


56-59: LGTM!

The updated cleanup logic correctly handles both files and directories. Using FileUtils.deleteDirectory for directories ensures recursive deletion of directory contents, while regular files continue using Files.delete. This prevents exceptions that would occur if Files.delete were called on a non-empty directory.
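
A self-contained sketch of that distinction, using java.nio and commons-io directly rather than the actual TempFileService:

import java.nio.file.{Files, Path}
import org.apache.commons.io.FileUtils

object TempCleanupSketch {
  // Directories need recursive deletion; plain files can be deleted directly.
  def cleanupTempPath(path: Path): Unit =
    if (Files.isDirectory(path))
      FileUtils.deleteDirectory(path.toFile) // removes contents recursively, then the directory itself
    else
      Files.delete(path) // would throw DirectoryNotEmptyException if called on a non-empty directory
}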

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1)

6-7: LGTM!

The TSChunkCacheService correctly extends ChunkCacheService and derives maxSizeBytes from the configuration. The DI wiring with @Inject is proper, and the protected visibility of maxSizeBytes matches the contract of the parent class.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)

131-157: Return final persisted version, not just the count (and keep grouping materialized)

Iterator exhaustion is fixed via toSeq, but returning the count alone can mislead callers. Prefer returning the actual last version written so follow-up uploads can chain correctly.

-      updatesGrouped = updateActions.grouped(100).toSeq
+      updatesGrouped = updateActions.grouped(100).toSeq
       _ <- Fox.serialCombined(updatesGrouped.zipWithIndex) {
         case (updateGroup: Seq[UpdateAction], updateGroupIndex) =>
           tracingDataStore.annotationUpdates.put(annotationId.toString,
-                                                 startVersion + updateGroupIndex,
+                                                 startVersion + updateGroupIndex,
                                                  Json.toJson(updateGroup))
       }
-      numberOfSavedVersions = updatesGrouped.length
-    } yield numberOfSavedVersions
+      finalVersion =
+        if (updatesGrouped.isEmpty) startVersion - 1 else startVersion + updatesGrouped.size - 1
+    } yield finalVersion

If the API intentionally needs “number of saved versions”, please align the controller/clients accordingly and rename variables to avoid confusion.

🧹 Nitpick comments (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)

186-189: Validate shape consistency between arrays

Ensure edgeIsAddition length matches edges’ first dimension to avoid silent truncation/mismatch.

-      numEdges <- editedEdgesZarrArray.datasetShape.flatMap(_.headOption).toFox
+      numEdges <- editedEdgesZarrArray.datasetShape.flatMap(_.headOption).toFox
+      numAdds  <- edgeIsAdditionZarrArray.datasetShape.flatMap(_.headOption).toFox
+      _ <- Fox.fromBool(numEdges == numAdds) ?~> "editableMappingFromZip.edgeIsAddition.length.mismatch"
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8fcf0bb and feb8eba.

📒 Files selected for processing (3)
  • app/controllers/AnnotationIOController.scala (3 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (4 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2 hunks)
🔇 Additional comments (8)
app/controllers/AnnotationIOController.scala (5)

135-153: LGTM! Return value properly destructured and propagated.

The changes correctly handle the new tuple return type from mergeAndSaveVolumeLayers, destructuring it into mergedVolumeLayers and earliestAccessibleVersion. The earliestAccessibleVersion is appropriately passed to the AnnotationProto constructor, replacing the previous constant 0L.


171-173: LGTM! Function signature and empty case correctly updated.

The return type change to Fox[(List[AnnotationLayer], Long)] is consistent with the need to propagate the earliest accessible version. Returning 0L for the empty case is appropriate, likely indicating no version restrictions when no volume layers are present.


174-181: LGTM! Comprehensive edge-case validations.

The three validation checks effectively prevent problematic scenarios:

  1. Lines 174-176: Prevents duplicate fallback layers within a single annotation using distinctBy, ensuring each volume layer references a unique segmentation layer.
  2. Lines 177-178: Blocks merging when multiple annotations each contain multiple volume layers, avoiding complex multi-dimensional merge logic.
  3. Lines 179-181: Prevents merging annotations with editable mapping edges, as merging proofreading history is complex and unsupported.

These validations are well-reasoned and protect against data corruption.


183-218: Past version counter issue successfully resolved!

The previous review flagged a critical bug where the version counter would reset when saveEditableMappingIfPresent returned 0L, corrupting FossilDB history by overwriting earlier updates.

The current implementation at line 199 correctly uses addition rather than replacement:

_ = layerUpdatesStartVersionMutable = layerUpdatesStartVersionMutable + numberOfSavedVersions

This ensures:

  • When numberOfSavedVersions = 0 (no editable mapping): counter remains unchanged ✓
  • When numberOfSavedVersions > 0: counter advances by the exact number of saved versions ✓

The monotonic version ordering is now preserved across all layers, preventing history corruption.

Example trace:

  • Layer 1: saves updates 1-5 → counter becomes 6
  • Layer 2: no editable mapping → counter stays 6
  • Layer 3: saves updates 6-8 → counter becomes 9

Each layer receives a non-overlapping version range, as intended.
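
A tiny self-contained model of that chaining (the numbers mirror the example trace above; the real logic lives in mergeAndSaveVolumeLayers):

object VersionChainingDemo extends App {
  // Versions saved per layer: layer 1 has an editable mapping (5 versions),
  // layer 2 has none (0), layer 3 has one with 3 versions.
  val savedVersionsPerLayer = Seq(5L, 0L, 3L)

  // Fold the start version across layers, just like the mutable counter in the controller.
  val earliestAccessibleVersion =
    savedVersionsPerLayer.foldLeft(1L)((nextStartVersion, saved) => nextStartVersion + saved)

  println(earliestAccessibleVersion) // 9: layer 1 wrote versions 1-5, layer 3 wrote 6-8
}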


218-218: Verify return value consistency across scenarios.

There's a potential inconsistency in the earliestAccessibleVersion returned:

  • Line 218 (single annotation): Returns layerUpdatesStartVersionMutable, which is 1L if no editable mappings were saved
  • Line 239 (multiple annotations): Returns 0L even though no editable mappings can exist (blocked by line 179-181 validation)

Question: When no editable mappings are saved, should both scenarios return 0L to consistently indicate "no version restrictions"?

The difference might be:

  • 0L = special sentinel for "all versions accessible, no imported history"
  • 1L = "versions start at 1, all accessible"

If both values have the same meaning, this inconsistency is harmless. However, if 0L vs 1L have different semantics when used to restrict version access elsewhere in the codebase, returning 1L for a single annotation without editable mappings could inadvertently restrict access.

Run the following script to understand how earliestAccessibleVersion is used:

Also applies to: 232-239

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (2)

34-36: Good extension with KeyValueStoreImplicits

Brings in needed toFox/shiftBox utils; consistent with usage below.


193-211: Harden upload endpoint: parser, logging, cleanup, and return semantics

  • Use a raw body parser with explicit maxLength and wrap in log() for consistency and observability.
  • Ensure the uploaded temp file is deleted after processing to avoid disk leaks.
  • Verify whether the API should return the final persisted version (startVersion + groups - 1) vs just the count; caller expectations may break otherwise.
  • Optionally assert that annotationId actually owns tracingId and that the tracing has editableMapping before writing.

Example adjustments:

-  def saveFromZip(...): Action[AnyContent] =
-    Action.async { implicit request =>
+  def saveFromZip(...): Action[AnyContent] =
+    Action.async(bodyParsers.raw /* consider maxLength */) { implicit request =>
-      accessTokenService.validateAccessFromTokenContext(UserAccessRequest.webknossos) {
-        for {
-          editedEdgesZip <- request.body.asRaw.map(_.asFile).toFox ?~> "zipFile.notFound"
+      log() {
+        accessTokenService.validateAccessFromTokenContext(UserAccessRequest.webknossos) {
+          for {
+            editedEdgesZip <- request.body.asRaw.map(_.asFile).toFox ?~> "zipFile.notFound"
           before = Instant.now
-          numberOfSavedVersions <- editableMappingIOService.initializeFromUploadedZip(tracingId,
+          saved <- editableMappingIOService.initializeFromUploadedZip(tracingId,
                                                                                       annotationId,
                                                                                       startVersion,
                                                                                       baseMappingName,
                                                                                       editedEdgesZip)
           _ = Instant.logSince(before, s"Initializing editable mapping $tracingId from zip")
-        } yield Ok(Json.toJson(numberOfSavedVersions))
-      }
+        } yield {
+          // best-effort cleanup of uploaded temp file
+          try editedEdgesZip.delete() catch { case _: Throwable => () }
+          Ok(Json.toJson(saved))
+        }
+        }
+      }
     }

To confirm what callers expect (count vs final version), run:

webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)

192-225: UpdateAction construction looks correct

Correctly maps additions to Merge and removals to Split with segment IDs, unit mag, and timestamp. Good defaults for optional metadata.
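
Schematically, the mapping reads as follows (illustrative case classes, not the actual update action types):

object EdgeToUpdateActionSketch {
  sealed trait EdgeUpdateAction
  final case class MergeAgglomerateAction(segmentId1: Long, segmentId2: Long) extends EdgeUpdateAction
  final case class SplitAgglomerateAction(segmentId1: Long, segmentId2: Long) extends EdgeUpdateAction

  // An added edge means the two segments were merged; a removed edge means they were split.
  def buildUpdateActionFromEdge(edgeSrc: Long, edgeDst: Long, isAddition: Boolean): EdgeUpdateAction =
    if (isAddition) MergeAgglomerateAction(edgeSrc, edgeDst)
    else SplitAgglomerateAction(edgeSrc, edgeDst)
}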

@fm3 fm3 requested a review from MichaelBuessemeyer October 7, 2025 11:19

@MichaelBuessemeyer MichaelBuessemeyer left a comment


Codewise I didn't find much. A few optional improvements and mainly questions :)

Did not test yet though

None,
None,
None,
chunkCacheService.sharedChunkContentsCache)
Contributor


🤔 Here the chunk cache is passed so that the Zarr3 array has one, as it needs one. But since the whole array is read, presumably (without me having checked the code) all of it is loaded into the cache. Imo, after the upload this Zarr array is never read from again and is thus obsolete, so loading all of its content through a cache is as well. So I'd say this is unnecessary and could maybe be replaced with a None or similar that turns off caching in a Zarr array, or with a cache that never stores anything 🤔?

Member Author


Hmm, you're not wrong. I think it is likely we will use the same cache in more spots in the future. Maybe in a future iteration we can also get rid of reading the full array here (though I think it's usually small enough that this isn't a problem). I'd like to avoid changing the interface of AlfuCache or DatasetArray for this optimization. If it's ok with you, we could just reduce the datastore default cache size in the config?

Contributor


ok 👍, maybe add a comment to TSChunkCacheService.scala that this cache is pretty much unused in the current state but might be useful in the future? Just to persist that we had this discussion and decided to keep the cache, although it doesn't really have a benefit except helping to make the editable mapping upload's reading of a DatasetArray interface-compliant?


private def buildUpdateActionFromEdge(edgeSrc: Long,
edgeDst: Long,
isAddition: Boolean,
Contributor


🤔 if "isAddition" is meaning a merging edge, I think it would be more intuitive to call is isMergingEdge or isMerging or so. Because while reading the previous code up until now, I was unsure what "isAddition" and its zarr array meant.

Member Author


Good point. I'd like to follow the naming from the file format in this context, but I added a code comment. I hope it is clearer now?

address = "localhost"
port = 6379
}
cache.chunkCacheMaxSizeBytes = 2000000000 # 2 GB
Contributor


As I posted before, caching the read results from the uploaded editable mappings doesn't yield any benefit imo. In that case, reducing the max cache size could leave more free RAM (or we could have no cache at all).


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2)

131-159: Iterator exhaustion/off-by-one fixed; confirm return value semantics

Materializing groups with .toSeq resolves the prior iterator exhaustion. Returning numberOfSavedVersions is fine—please confirm callers expect a count, not the last version number.


160-192: Delete the temporary unzip directory; avoid polluting the shared chunk cache

The directory from createDirectory() is never removed. Add guaranteed cleanup to prevent disk leaks. Also consider using a dedicated non-shared/throwaway chunk cache for this one-off bulk read to avoid evicting useful entries.

Apply:

   private def unzipAndReadZarr(editedEdgesZip: File)(implicit ec: ExecutionContext,
                                                      tc: TokenContext): Fox[(MultiArray, MultiArray)] = {
     val unzippedDir = tempFileService.createDirectory()
-    for {
+    def cleanup(): Unit = try tempFileService.deleteDirectory(unzippedDir) catch { case _: Throwable => () }
+    (for {
       _ <- ZipIO
         .unzipToDirectory(editedEdgesZip,
                           unzippedDir,
                           includeHiddenFiles = true,
                           List.empty,
                           truncateCommonPrefix = false,
                           excludeFromPrefix = None)
         .toFox
       unzippedVaultPath = new VaultPath(UPath.fromLocalPath(unzippedDir), FileSystemDataVault.create)
       editedEdgesZarrArray <- Zarr3Array.open(unzippedVaultPath / "edges/",
                                               DataSourceId("dummy", "unused"),
                                               "layer",
                                               None,
                                               None,
                                               None,
                                               chunkCacheService.sharedChunkContentsCache)
       edgeIsAdditionZarrArray <- Zarr3Array.open(unzippedVaultPath / "edgeIsAddition/",
                                                  DataSourceId("dummy", "unused"),
                                                  "layer",
                                                  None,
                                                  None,
                                                  None,
                                                  chunkCacheService.sharedChunkContentsCache)
       numEdges <- editedEdgesZarrArray.datasetShape.flatMap(_.headOption).toFox
       _ <- Fox.fromBool(numEdges.toInt.toLong == numEdges) ?~> "editableMappingFromZip.numEdges.exceedsInt"
       editedEdges <- editedEdgesZarrArray.readAsMultiArray(offset = Array(0L, 0L), shape = Array(numEdges.toInt, 2))
       edgeIsAddition <- edgeIsAdditionZarrArray.readAsMultiArray(offset = 0L, shape = numEdges.toInt)
-    } yield (editedEdges, edgeIsAddition)
+    } yield (editedEdges, edgeIsAddition)).andThen(_ => cleanup())
   }

If possible, replace sharedChunkContentsCache with a small/ephemeral cache for this import to avoid cache churn.

🧹 Nitpick comments (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)

194-226: Name clarity: isAddition → isMergingEdge (or similar)

Rename isAddition to reflect the domain (merge vs split), e.g., isMergingEdge, for readability.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between feb8eba and 4e794e3.

📒 Files selected for processing (6)
  • app/models/annotation/WKRemoteTracingStoreClient.scala (1 hunks)
  • conf/application.conf (1 hunks)
  • conf/messages (1 hunks)
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (1 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (4 hunks)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • conf/application.conf
  • conf/messages
  • webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala
🧰 Additional context used
🧬 Code graph analysis (3)
app/models/annotation/WKRemoteTracingStoreClient.scala (3)
util/src/main/scala/com/scalableminds/util/tools/Fox.scala (7)
  • s (236-240)
  • s (240-250)
  • s (250-259)
  • Fox (30-230)
  • Fox (232-305)
  • successful (53-56)
  • failure (58-62)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/rpc/RPCRequest.scala (13)
  • addQueryParam (30-33)
  • addQueryParam (33-36)
  • addQueryParam (36-39)
  • addQueryParam (39-42)
  • addQueryParam (42-45)
  • addQueryParam (45-50)
  • addQueryParam (50-53)
  • addQueryParam (53-56)
  • addQueryParam (56-59)
  • addQueryParam (59-63)
  • addQueryParam (63-68)
  • addQueryParam (68-71)
  • postFileWithJsonResponse (167-170)
app/controllers/UserTokenController.scala (1)
  • RpcTokenHolder (31-39)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (3)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/services/AccessTokenService.scala (4)
  • webknossos (70-72)
  • validateAccessFromTokenContext (87-94)
  • UserAccessRequest (30-30)
  • UserAccessRequest (31-72)
util/src/main/scala/com/scalableminds/util/time/Instant.scala (2)
  • now (48-48)
  • logSince (72-75)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)
  • initializeFromUploadedZip (131-160)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (9)
util/src/main/scala/com/scalableminds/util/io/ZipIO.scala (2)
  • ZipIO (15-309)
  • zip (70-87)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datavault/VaultPath.scala (1)
  • VaultPath (19-100)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/helpers/UPath.scala (2)
  • UPath (54-96)
  • fromLocalPath (80-80)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1)
  • TSChunkCacheService (6-8)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TsTempFileService.scala (1)
  • TsTempFileService (8-11)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/TracingDataStore.scala (1)
  • TracingDataStore (13-66)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingService.scala (1)
  • create (134-141)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3Array.scala (3)
  • Zarr3Array (16-40)
  • Zarr3Array (42-184)
  • open (18-40)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/DatasetArray.scala (1)
  • readAsMultiArray (189-192)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build-smoketest-push
  • GitHub Check: backend-tests
  • GitHub Check: frontend-tests
🔇 Additional comments (2)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/controllers/EditableMappingController.scala (2)

5-5: LGTM: timing + KeyValueStoreImplicits integration

Using Instant for timing and mixing in KeyValueStoreImplicits looks correct and consistent with surrounding code.

Also applies to: 21-22, 34-36


193-211: Here’s the context around the parser settings in conf/application.conf:

$ sed -n '1,30p' conf/application.conf
play.http.parser {
  maxMemoryBuffer = 250M
  maxDiskBuffer   = 1G
}
play.http.fileMimeTypes = ${play.http.fileMimeTypes}

The defaults here apply to all HTTP body parsers (including raw). Since you’re using the raw body parser via request.body.asRaw, these limits (250 MB in-memory, 1 GB on disk) already cover large ZIP uploads.

Review comment update

Confirm existing parser limits suffice for ZIP uploads

Your global settings in conf/application.conf already configure Play’s raw-body parser to allow up to 250 MB in memory and spill to disk up to 1 GB. No further changes to parser configuration are required unless you anticipate uploads larger than these thresholds.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
app/controllers/AnnotationIOController.scala (1)

223-243: Consider clarifying the comment for better readability.

The multi-annotation merge path correctly returns (List(...), 0L) as no editable mappings are present. The comment could be slightly more concise.

Optional improvement:

-    } else { // Multiple annotations with volume layers (but at most one each) were uploaded, they have no editable mappings. Merge those volume layers into one
+    } else { // Multiple annotations (each with at most one volume layer, none with editable mappings) were uploaded. Merge those volume layers into one.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4e794e3 and 3ef0cc5.

📒 Files selected for processing (1)
  • app/controllers/AnnotationIOController.scala (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
app/controllers/AnnotationIOController.scala (3)
util/src/main/scala/com/scalableminds/util/tools/Fox.scala (8)
  • map (281-284)
  • map (377-377)
  • Fox (30-230)
  • Fox (232-305)
  • successful (53-56)
  • failure (58-62)
  • serialCombined (95-99)
  • serialCombined (99-111)
app/models/annotation/WKRemoteTracingStoreClient.scala (2)
  • saveEditableMappingIfPresent (252-269)
  • saveVolumeTracing (227-250)
app/models/annotation/AnnotationUploadService.scala (2)
  • getEditableMappingEdgesZipFrom (32-35)
  • getDataZipFrom (29-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: backend-tests
  • GitHub Check: build-smoketest-push
  • GitHub Check: frontend-tests
🔇 Additional comments (6)
app/controllers/AnnotationIOController.scala (6)

166-169: LGTM! Helper function correctly detects duplicate fallback layers.

The logic filters layers with fallback layers and compares the count to the distinct count, which accurately identifies duplicates.
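
In sketch form (the field name is an assumption), the check amounts to:

object DuplicateFallbackCheckSketch {
  // A volume layer's optional fallback segmentation layer name.
  case class VolumeLayerLike(fallbackLayerName: Option[String])

  // True if two or more volume layers reference the same fallback segmentation layer.
  def hasDuplicateFallbackLayers(layers: Seq[VolumeLayerLike]): Boolean = {
    val withFallback = layers.flatMap(_.fallbackLayerName)
    withFallback.size != withFallback.distinct.size
  }
}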


177-178: LGTM! Correct return value for empty case.

Returns (List(), 0L) matching the new signature where 0L indicates no version history.


179-180: LGTM! Validation prevents data corruption from duplicate fallback layers.

Correctly rejects annotations where multiple volume layers share the same fallback segmentation layer, which would lead to ambiguity.


187-222: LGTM! Version counter logic correctly addresses the past critical issue.

The implementation properly tracks update action versions across layers:

  1. Initializes layerUpdatesStartVersionMutable to 1L (first version)
  2. Calls saveEditableMappingIfPresent with the current version as startVersion
  3. Increments counter by numberOfSavedVersions (0 when no mapping present)
  4. Returns final counter value as earliestAccessibleVersion (next available version, marking imported history as inaccessible)

The use of Fox.serialCombined ensures sequential processing with no race conditions. When numberOfSavedVersions is 0, the counter correctly stays unchanged rather than resetting.


135-153: LGTM! Proper usage of the new return signature.

The calling code correctly unpacks (mergedVolumeLayers, earliestAccessibleVersion) and passes earliestAccessibleVersion to AnnotationProto to mark imported history as inaccessible, while using the pre-generated newAnnotationId for annotation creation.


183-185: LGTM! Critical validation prevents unsupported merge scenarios. Message key annotation.upload.editableMappingIncompleteInformation exists in conf/messages.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (1)

164-196: Temp directory leak: ensure unzip directory is deleted

The directory from createDirectory() isn’t deleted. Large uploads will accumulate on disk. Always clean up after opening/reading Zarr.

   private def unzipAndReadZarr(editedEdgesZip: File)(implicit ec: ExecutionContext,
                                                      tc: TokenContext): Fox[(MultiArray, MultiArray)] = {
-    val unzippedDir = tempFileService.createDirectory()
-    for {
+    val unzippedDir = tempFileService.createDirectory()
+    val unzippedUPath = UPath.fromLocalPath(unzippedDir)
+    def cleanup(): Unit = try unzippedUPath.deleteRecursively() catch { case _: Throwable => () }
+    (for {
       _ <- ZipIO
         .unzipToDirectory(editedEdgesZip,
                           unzippedDir,
                           includeHiddenFiles = true,
                           List.empty,
                           truncateCommonPrefix = false,
                           excludeFromPrefix = None)
         .toFox
-      unzippedVaultPath = new VaultPath(UPath.fromLocalPath(unzippedDir), FileSystemDataVault.create)
+      unzippedVaultPath = new VaultPath(unzippedUPath, FileSystemDataVault.create)
       editedEdgesZarrArray <- Zarr3Array.open(unzippedVaultPath / "edges/",
                                               DataSourceId("dummy", "unused"),
                                               "layer",
                                               None,
                                               None,
                                               None,
                                               chunkCacheService.sharedChunkContentsCache)
       edgeIsAdditionZarrArray <- Zarr3Array.open(unzippedVaultPath / "edgeIsAddition/",
                                                  DataSourceId("dummy", "unused"),
                                                  "layer",
                                                  None,
                                                  None,
                                                  None,
                                                  chunkCacheService.sharedChunkContentsCache)
-      numEdges <- editedEdgesZarrArray.datasetShape.flatMap(_.headOption).toFox
-      _ <- Fox.fromBool(numEdges.toInt.toLong == numEdges) ?~> "editableMappingFromZip.numEdges.exceedsInt"
-      editedEdges <- editedEdgesZarrArray.readAsMultiArray(offset = Array(0L, 0L), shape = Array(numEdges.toInt, 2))
-      edgeIsAddition <- edgeIsAdditionZarrArray.readAsMultiArray(offset = 0L, shape = numEdges.toInt)
-    } yield (editedEdges, edgeIsAddition)
+      numEdges <- editedEdgesZarrArray.datasetShape.flatMap(_.headOption).toFox
+      _ <- Fox.fromBool(numEdges.toInt.toLong == numEdges) ?~> "editableMappingFromZip.numEdges.exceedsInt"
+      edgeFlagsCount <- edgeIsAdditionZarrArray.datasetShape.flatMap(_.headOption).toFox
+      _ <- Fox.fromBool(edgeFlagsCount == numEdges) ?~> "editableMappingFromZip.arrayLengthMismatch"
+      editedEdges <- editedEdgesZarrArray.readAsMultiArray(offset = Array(0L, 0L), shape = Array(numEdges.toInt, 2))
+      edgeIsAddition <- edgeIsAdditionZarrArray.readAsMultiArray(offset = 0L, shape = numEdges.toInt)
+    } yield (editedEdges, edgeIsAddition)).andThen { _ => cleanup() }
   }
🧹 Nitpick comments (2)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2)

142-146: Nit: fix comment typo and clarity

“splict” -> “split”. Consider explicitly stating “edgeIsAddition == true => merge; false => split”.


177-190: Optional: avoid polluting shared chunk cache for one-shot reads

Opening Zarr arrays with sharedChunkContentsCache for a one-time full read can evict useful entries. If supported, pass a no-op/ephemeral cache or disable caching for this path.

Is there a lightweight cache option (e.g., a small LRU or NoCache) we can pass to Zarr3Array.open for single-use reads?
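
As a rough, self-contained illustration of the "small LRU" idea (this is not the project's AlfuCache API, just a stand-in showing the shape of a bounded cache for single-use reads):

    import scala.collection.mutable

    // Minimal bounded LRU sketch; evicts the least recently used entry once full.
    final class TinyLruCache[K, V](maxEntries: Int) {
      private val underlying = mutable.LinkedHashMap.empty[K, V]

      def getOrElseUpdate(key: K, compute: => V): V = synchronized {
        underlying.remove(key) match {
          case Some(value) =>
            underlying.put(key, value) // re-insert so the entry counts as most recently used
            value
          case None =>
            val value = compute
            underlying.put(key, value)
            if (underlying.size > maxEntries)
              underlying.remove(underlying.head._1) // drop the oldest entry
            value
        }
      }
    }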

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 26fe3f8 and 06975e5.

📒 Files selected for processing (1)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (9)
util/src/main/scala/com/scalableminds/util/io/ZipIO.scala (2)
  • ZipIO (15-309)
  • zip (70-87)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datavault/VaultPath.scala (1)
  • VaultPath (19-100)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/helpers/UPath.scala (2)
  • UPath (54-96)
  • fromLocalPath (80-80)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1)
  • TSChunkCacheService (6-8)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/files/TsTempFileService.scala (1)
  • TsTempFileService (8-11)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingService.scala (1)
  • create (134-141)
util/src/main/scala/com/scalableminds/util/time/Instant.scala (1)
  • now (48-48)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/zarr3/Zarr3Array.scala (3)
  • Zarr3Array (16-40)
  • Zarr3Array (42-184)
  • open (18-40)
webknossos-datastore/app/com/scalableminds/webknossos/datastore/datareaders/DatasetArray.scala (1)
  • readAsMultiArray (189-192)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: frontend-tests
  • GitHub Check: backend-tests
  • GitHub Check: build-smoketest-push
🔇 Additional comments (2)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/tracings/editablemapping/EditableMappingIOService.scala (2)

211-212: Verify mag=1 assumption

Using Vec3Int.ones presumes mag 1 exists. If the dataset lacks mag 1, downstream consumers may misinterpret positions. Please confirm this is acceptable for update actions, or derive the minimum available mag.
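
A hedged sketch of the "derive the minimum available mag" alternative; Vec3Int here is a local stand-in and availableMags is an assumed input, not the actual layer API:

    // Stand-in type for illustration only; the project's Vec3Int lives in util.
    final case class Vec3Int(x: Int, y: Int, z: Int) {
      def maxDim: Int = math.max(x, math.max(y, z))
    }

    // Pick the finest mag the layer actually provides instead of assuming mag 1 exists.
    // Assumes availableMags is non-empty.
    def finestMag(availableMags: Seq[Vec3Int]): Vec3Int =
      availableMags.minBy(_.maxDim)

    // Example: Seq(Vec3Int(2, 2, 1), Vec3Int(4, 4, 2)) yields Vec3Int(2, 2, 1).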


154-162: Clarify return value of initializeFromUploadedZip

The method currently yields the count of saved version‐batches. If clients expect the final version number (startVersion + count – 1) instead, update the implementation and API contract accordingly; otherwise keep returning the batch count. Confirm the intended behavior for initializeFromUploadedZip.
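
A worked example of the offset between the two candidate contracts (the numbers are assumed, purely illustrative):

    val startVersion: Long = 5L
    val savedVersionBatches: Long = 3L // what the method currently returns
    val finalVersion: Long = startVersion + savedVersionBatches - 1 // 7, the alternative contract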

@MichaelBuessemeyer MichaelBuessemeyer left a comment

Testing went well, except for this step:

Test with multiple proofreading layers in one annotation

I was unable to create an annotation with multiple layers that each have an editable mapping active with at least one proofreading action done. I can only activate the mapping for the second layer, but the proofreading tool is not selectable, so I'm not sure how to test this case, to be honest.

None,
None,
None,
chunkCacheService.sharedChunkContentsCache)

ok 👍, maybe add a comment to TSChunkCacheService.scala noting that this cache is pretty much unused in its current state but might be useful in the future? Just to record that we had this discussion and decided to keep the cache, although its only real benefit right now is letting the editable-mapping upload read a DatasetArray in an interface-compliant way.

@fm3
Copy link
Member Author

fm3 commented Oct 13, 2025

You’re right, sorry. I removed this “step to test” now; I also found out that this is not a supported case anyway.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1)

6-10: Consider tracking the future work more formally.

The comment provides helpful context about the current limited usage and small cache size. However, the directive to "remove this comment and increase the size" when more usages are added could be tracked more formally (e.g., as a GitHub issue or TODO comment) to ensure it's not forgotten.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 899d097 and 34d7248.

📒 Files selected for processing (1)
  • webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (1)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TracingStoreConfig.scala (2)
  • Tracingstore (14-32)
  • Cache (28-30)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: backend-tests
  • GitHub Check: build-smoketest-push
  • GitHub Check: frontend-tests
🔇 Additional comments (2)
webknossos-tracingstore/app/com/scalableminds/webknossos/tracingstore/TSChunkCacheService.scala (2)

1-4: LGTM!

Package declaration and imports are correct and minimal.


11-13: LGTM!

The implementation is minimal and correct, properly using dependency injection to read the cache size from configuration.
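
For readers without the file open, a self-contained analogue of the pattern (trait name, config key, and value are assumptions, not quoted from TSChunkCacheService.scala):

    import com.typesafe.config.{Config, ConfigFactory}

    // Assumed trait shape for illustration; the real trait is ChunkCacheService.
    trait ChunkCacheLike {
      def maxSizeKiloBytes: Long
    }

    // Reads the cache size from injected configuration instead of hard-coding it.
    final class ConfigBackedChunkCache(config: Config) extends ChunkCacheLike {
      override val maxSizeKiloBytes: Long =
        config.getLong("tracingstore.cache.imageArrayChunks.maxSizeKiloBytes")
    }

    object ChunkCacheDemo {
      def main(args: Array[String]): Unit = {
        val cfg = ConfigFactory.parseString("tracingstore.cache.imageArrayChunks.maxSizeKiloBytes = 2048")
        println(new ConfigBackedChunkCache(cfg).maxSizeKiloBytes) // prints 2048
      }
    }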

@MichaelBuessemeyer MichaelBuessemeyer left a comment

Since I tested the last time I reviewed and nothing has really changed, I don't think retesting makes sense.

As everything worked last time 🟢

@fm3 fm3 enabled auto-merge (squash) October 15, 2025 09:01
@fm3 fm3 merged commit 4ea0239 into master Oct 15, 2025
5 checks passed
@fm3 fm3 deleted the upload-editable-mapping branch October 15, 2025 09:08

Development

Successfully merging this pull request may close these issues.

Reupload exported editableMapping annotations
